34 research outputs found

    Exploring the Capacity of Open, Linked Data Sources to Assess Adverse Drug Reaction Signals

    In this work, we explore the capacity of open, linked data sources to assess adverse drug reaction (ADR) signals. Our study is based on a set of drug-related Bio2RDF data sources and three reference datasets, containing both positive and negative ADR signals, which were used for benchmarking. We present the overall approach for this assessment and refer to some early findings based on the analysis performed so far
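
The benchmarking idea described above — scoring detected signals against reference sets of known positive and negative drug-event associations — can be sketched as follows. This is a minimal illustration, not the study's implementation; all drug and event names are invented.

```python
def evaluate_signals(detected, reference_pos, reference_neg):
    """Compute precision/recall of detected (drug, event) pairs against
    reference sets of confirmed positive and confirmed negative signals."""
    detected = set(detected)
    tp = len(detected & reference_pos)   # correctly flagged known positives
    fp = len(detected & reference_neg)   # flagged but known to be negative
    fn = len(reference_pos - detected)   # missed known positives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy reference sets and detector output, for illustration only:
positives = {("drugA", "rash"), ("drugB", "nausea")}
negatives = {("drugC", "headache")}
detected = [("drugA", "rash"), ("drugC", "headache")]
precision, recall = evaluate_signals(detected, positives, negatives)
```

In practice the reference sets would come from curated resources and the detected pairs from queries over the linked data sources.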

    Mining Social Media for Perceptions and Trends on HIV Pre-Exposure Prophylaxis

    Pre-Exposure Prophylaxis (PrEP) is an approach for preventing the human immunodeficiency virus (HIV), which entails the administration of antiretroviral medication to high-risk seronegative persons. If taken correctly, PrEP can reduce HIV infection risk by more than 90%. The aim of this study was to identify and examine PrEP-related perceptions and trends discussed on Twitter. Using open-source technologies, text-mining and interactive visualisation techniques, a comprehensive data gathering and analytics Web-based platform was developed to facilitate the study objectives. Our results demonstrate that PrEP-related discussions on Twitter can be monitored over time and that valuable insights can be obtained concerning issues of PrEP awareness, expressed opinions, perceived barriers and key discussion points on its adoption. The proposed platform could support public-health professionals and policy makers in PrEP monitoring, facilitating informed decision making and strategy planning for efficient HIV combination prevention
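
A minimal sketch of the trend-tracking idea — counting keyword mentions per month across a tweet stream. The keyword list and sample tweets are assumptions for illustration; the study's platform is far richer (text mining, interactive visualisation).

```python
from collections import Counter

KEYWORDS = {"prep", "truvada", "hiv"}  # illustrative terms, not the study's lexicon

def monthly_trends(tweets):
    """tweets: iterable of (month, text) pairs.
    Returns a Counter keyed by (month, keyword) mention counts."""
    trends = Counter()
    for month, text in tweets:
        tokens = set(text.lower().split())
        for kw in KEYWORDS & tokens:
            trends[(month, kw)] += 1
    return trends

sample = [("2016-01", "Started PrEP today"),
          ("2016-01", "PrEP works if taken daily"),
          ("2016-02", "Truvada side effects?")]
trends = monthly_trends(sample)
```

Plotting these counts over time is the simplest way such a platform can surface rising or falling discussion topics.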

    Semantic Integration of Cervical Cancer Data Repositories to Facilitate Multicenter Association Studies: The ASSIST Approach

    The current work addresses the unification of Electronic Health Records related to cervical cancer into a single medical knowledge source, in the context of the EU-funded ASSIST research project. The project aims to facilitate the research for cervical precancer and cancer through a system that virtually unifies multiple patient record repositories, physically located in different medical centers/hospitals, thus, increasing flexibility by allowing the formation of study groups “on demand” and by recycling patient records in new studies. To this end, ASSIST uses semantic technologies to translate all medical entities (such as patient examination results, history, habits, genetic profile) and represent them in a common form, encoded in the ASSIST Cervical Cancer Ontology. The current paper presents the knowledge elicitation approach followed, towards the definition and representation of the disease’s medical concepts and rules that constitute the basis for the ASSIST Cervical Cancer Ontology. The proposed approach constitutes a paradigm for semantic integration of heterogeneous clinical data that may be applicable to other biomedical application domains
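
The semantic-translation step described above — mapping each site's local field names onto shared ontology terms so records become comparable — can be sketched very simply. The field names and mappings below are invented for illustration, not taken from the ASSIST Cervical Cancer Ontology.

```python
# Hypothetical per-site mappings from local field names to shared terms:
SITE_A_MAP = {"smoker": "habit:smoking", "pap_result": "exam:pap_smear"}
SITE_B_MAP = {"tobacco_use": "habit:smoking", "papanicolaou": "exam:pap_smear"}

def to_common_form(record, field_map):
    """Rename a site's local record fields to shared ontology-style terms,
    leaving unmapped fields unchanged."""
    return {field_map.get(k, k): v for k, v in record.items()}

a = to_common_form({"smoker": "yes"}, SITE_A_MAP)
b = to_common_form({"tobacco_use": "yes"}, SITE_B_MAP)
# Both records now share the same key, so they can be pooled into one study group.
```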

    Computational Advances in Drug Safety: Systematic and Mapping Review of Knowledge Engineering Based Approaches

    Drug Safety (DS) is a domain with significant public health and social impact. Knowledge Engineering (KE) is the Computer Science discipline elaborating on methods and tools for developing “knowledge-intensive” systems, depending on a conceptual “knowledge” schema and some kind of “reasoning” process. The present systematic and mapping review aims to investigate KE-based approaches employed for DS and highlight the introduced added value as well as trends and possible gaps in the domain. Journal articles published between 2006 and 2017 were retrieved from PubMed/MEDLINE and Web of Science® (873 in total) and filtered based on a comprehensive set of inclusion/exclusion criteria. The 80 finally selected articles were reviewed on full-text, while the mapping process relied on a set of concrete criteria (concerning specific KE and DS core activities, special DS topics, employed data sources, reference ontologies/terminologies, and computational methods, etc.). The analysis results are publicly available as online interactive analytics graphs. The review clearly depicted increased use of KE approaches for DS. The collected data illustrate the use of KE for various DS aspects, such as Adverse Drug Event (ADE) information collection, detection, and assessment. Moreover, the quantified analysis of using KE for the respective DS core activities highlighted room for intensifying research on KE for ADE monitoring, prevention and reporting. Finally, the assessed use of the various data sources for DS special topics demonstrated extensive use of dominant data sources for DS surveillance, i.e., Spontaneous Reporting Systems, but also increasing interest in the use of emerging data sources, e.g., observational healthcare databases, biochemical/genetic databases, and social media.
Various exemplar applications were identified with promising results, e.g., improvement in Adverse Drug Reaction (ADR) prediction, detection of drug interactions, and novel ADE profiles related to specific mechanisms of action. Nevertheless, since the reviewed studies mostly concerned proof-of-concept implementations, more intense research is required to increase the maturity level that is necessary for KE approaches to reach routine DS practice. In conclusion, we argue that efficiently addressing DS data analytics and management challenges requires the introduction of high-throughput KE-based methods for effective knowledge discovery and management, resulting ultimately, in the establishment of a continuous learning DS system
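
The abstract names Spontaneous Reporting Systems as the dominant data source for DS surveillance. A classic statistic computed over such reports is the Proportional Reporting Ratio (PRR), derived from a 2x2 contingency table of reports. The sketch below uses invented toy counts, not data from the review.

```python
def prr(a, b, c, d):
    """Proportional Reporting Ratio from a 2x2 contingency table of reports:
    a: reports with the drug and the adverse event
    b: reports with the drug, without the event
    c: reports without the drug, with the event
    d: reports without the drug, without the event
    """
    return (a / (a + b)) / (c / (c + d))

# Toy counts, for illustration only:
value = prr(a=20, b=80, c=100, d=9800)
# 20/100 = 0.2 of this drug's reports mention the event, versus
# 100/9900 for all other drugs, giving a PRR of 19.8.
```

A PRR well above 1 (combined with thresholds on report counts) is one conventional trigger for flagging a potential signal for expert assessment.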

    Clinical validation of an algorithm for rapid and accurate automated segmentation of intracoronary optical coherence tomography images

    Objectives: The analysis of intracoronary optical coherence tomography (OCT) images is based on manual identification of the lumen contours and relevant structures. However, manual image segmentation is a cumbersome and time-consuming process, subject to significant intra- and inter-observer variability. This study aims to present and validate a fully-automated method for segmentation of intracoronary OCT images. Methods: We studied 20 coronary arteries (mean length = 39.7 ± 10.0 mm) from 20 patients who underwent a clinically-indicated cardiac catheterization. The OCT images (n = 1812) were segmented manually, as well as with a fully-automated approach. A semi-automated variation of the fully-automated algorithm was also applied. Using certain lumen size and lumen shape characteristics, the fully- and semi-automated segmentation algorithms were validated against manual segmentation, which was considered the gold standard. Results: Linear regression and Bland–Altman analysis demonstrated that both the fully-automated and semi-automated segmentation had a very high agreement with the manual segmentation, with the semi-automated approach being slightly more accurate than the fully-automated method. The fully-automated and semi-automated OCT segmentation reduced the analysis time by more than 97% and 86%, respectively, compared to manual segmentation. Conclusions: In the current work we validated a fully-automated OCT segmentation algorithm, as well as a semi-automated variation of it in an extensive “real-life” dataset of OCT images. The study showed that our algorithm can perform rapid and reliable segmentation of OCT images
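
The Bland–Altman agreement statistics used for this kind of validation — the mean difference (bias) between paired measurements and the 95% limits of agreement — can be sketched as below. The measurement values are invented; they are not data from the study.

```python
import statistics

def bland_altman(x, y):
    """Return (bias, lower_loa, upper_loa) for paired measurements x and y:
    the mean of the pairwise differences and bias +/- 1.96 standard deviations."""
    diffs = [a - b for a, b in zip(x, y)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Illustrative paired measurements (e.g. lumen areas in mm^2):
manual    = [3.1, 4.0, 2.8, 3.5]   # gold-standard manual tracing
automated = [3.0, 4.1, 2.9, 3.4]   # automated segmentation
bias, lo, hi = bland_altman(automated, manual)
```

A bias near zero with narrow limits of agreement is what "very high agreement" means quantitatively in the Results above.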

    Comprehensive user requirements engineering methodology for secure and interoperable health data exchange

    Background: Increased digitalization of healthcare comes along with the cost of cybercrime proliferation. This results in patients' and healthcare providers' skepticism to adopt Health Information Technologies (HIT). In Europe, this shortcoming hampers efficient cross-border health data exchange, which requires a holistic, secure and interoperable framework. This study aimed to provide the foundations for designing a secure and interoperable toolkit for cross-border health data exchange within the European Union (EU), conducted in the scope of the KONFIDO project. Particularly, we present our user requirements engineering methodology and the obtained results, driving the technical design of the KONFIDO toolkit. Methods: Our methodology relied on four pillars: (a) a gap analysis study, reviewing a range of relevant projects/initiatives, technologies as well as cybersecurity strategies for HIT interoperability and cybersecurity; (b) the definition of user scenarios with major focus on cross-border health data exchange in the three pilot countries of the project; (c) a user requirements elicitation phase containing a threat analysis of the business processes entailed in the user scenarios; and (d) surveying and discussing with key stakeholders, aiming to validate the obtained outcomes and identify barriers and facilitators for HIT adoption linked with cybersecurity and interoperability. Results: According to the gap analysis outcomes, full adherence to information security standards is currently not universally met. Sustainability plans shall be defined for adapting existing/evolving frameworks to the state-of-the-art. Overall, lack of integration in a holistic security approach was clearly identified. For each user scenario, we concluded with a comprehensive workflow, highlighting challenges and open issues for their application in our pilot sites. The threat analysis resulted in a set of 30 user goals in total, documented in detail.
Finally, indicative barriers to HIT acceptance include lack of awareness regarding HIT risks and legislation, lack of a security-oriented culture and management commitment, as well as usability constraints, while important facilitators concern the adoption of standards and current efforts towards a common EU legislative framework. Conclusions: Our study provides important insights to address secure and interoperable health data exchange, while our methodological framework constitutes a paradigm for investigating diverse cybersecurity-related risks in the health sector

    Accurate and reproducible reconstruction of coronary arteries and endothelial shear stress calculation using 3D OCT: Comparative study to 3D IVUS and 3D QCA

    Background: Geometrically-correct 3D OCT is a new imaging modality with the potential to investigate the association of local hemodynamic microenvironment with OCT-derived high-risk features. We aimed to describe the methodology of 3D OCT and investigate the accuracy, inter- and intra-observer agreement of 3D OCT in reconstructing coronary arteries and calculating ESS, using 3D IVUS and 3D QCA as references. Methods-Results: 35 coronary artery segments derived from 30 patients were reconstructed in 3D space using 3D OCT. 3D OCT was validated against 3D IVUS and 3D QCA. The agreement in artery reconstruction among 3D OCT, 3D IVUS and 3D QCA was assessed in 3-mm-long subsegments using lumen morphometry and ESS parameters. The inter- and intra-observer agreement of 3D OCT, 3D IVUS and 3D QCA were assessed in a representative sample of 61 subsegments (n = 5 arteries). The data processing times for each reconstruction methodology were also calculated. There was a very high agreement between 3D OCT vs. 3D IVUS and 3D OCT vs. 3D QCA in terms of total reconstructed artery length and volume, as well as in terms of segmental morphometric and ESS metrics with mean differences close to zero and narrow limits of agreement (Bland–Altman analysis). 3D OCT exhibited excellent inter- and intra-observer agreement. The analysis time with 3D OCT was significantly lower compared to 3D IVUS. Conclusions: Geometrically-correct 3D OCT is a feasible, accurate and reproducible 3D reconstruction technique that can perform reliable ESS calculations in coronary arteries

    A Multiagent System for Integrated Detection of Pharmacovigilance Signals

    Pharmacovigilance is the scientific discipline that copes with the continuous assessment of the safety profile of marketed drugs. This assessment relies on diverse data sources, which are routinely analysed to identify the so-called “signals”, i.e. potential associations between drugs and adverse effects, that are unknown or incompletely documented. Various computational methods have been proposed to support domain experts in signal detection. However, recent comparative studies illustrated that current methods exhibit high false-positive rates, significantly variable performance across different datasets used for analysis and events of interest, but also complementarity in their outcomes. In this regard, in order to reinforce accurate and timely signal detection, we elaborated an agent-based approach for the systematic, joint exploitation of multiple heterogeneous signal detection methods, data sources and other drug-related resources under a common, integrated framework. The approach relies on a multiagent system operating on the basis of a collaborative agent interaction protocol, aiming to implement a comprehensive workflow that comprises method selection and execution, as well as outcome aggregation, filtering, ranking and annotation. This paper presents the design of the proposed multiagent system, discusses implementation issues and demonstrates the applicability of the proposed solution in an example signal detection scenario. This work constitutes a step towards large-scale, integrated and knowledge-intensive computational signal detection
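
The aggregation-and-ranking step of such a workflow can be sketched as a simple voting scheme: drug-event pairs flagged by more detection methods rank higher. The method names below (PRR, ROR, BCPNN) are standard signal detection methods, but their outputs here are invented for illustration; the actual system's protocol is far more elaborate.

```python
from collections import Counter

def aggregate(method_outputs):
    """method_outputs: mapping of method name -> set of (drug, event) pairs.
    Returns the pairs ranked by how many methods flagged each one."""
    votes = Counter()
    for pairs in method_outputs.values():
        votes.update(pairs)
    return votes.most_common()

outputs = {
    "PRR":   {("drugA", "rash"), ("drugB", "nausea")},
    "ROR":   {("drugA", "rash")},
    "BCPNN": {("drugA", "rash"), ("drugC", "dizziness")},
}
ranked = aggregate(outputs)
# ("drugA", "rash") is flagged by all three methods and ranks first.
```

Exploiting the complementarity of methods this way is one plausible reading of the "outcome aggregation, filtering, ranking" stage the abstract describes.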